Inexact Nonconvex Newton-Type Methods

Authors

Abstract

The paper aims to extend the theory and application of nonconvex Newton-type methods, namely trust region and cubic regularization, to settings in which, in addition to the solution of the subproblems, the gradient and Hessian of the objective function are approximated. Under certain conditions on such approximations, it establishes the same optimal worst-case iteration complexities as the exact counterparts. This work is part of a broader research program on designing, analyzing, and implementing efficient second-order optimization methods for large-scale machine learning applications. The authors were based at UC Berkeley when the idea for the project was conceived. The first two authors were PhD students and the third author was a postdoc, all supervised by the fourth author.
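
As a rough illustration of the kind of inexactness considered here (a sketch under stated assumptions, not the paper's actual algorithm), the Python fragment below performs one Newton-type trust-region step in which the gradient and Hessian of a finite-sum objective are approximated by subsampling, and the subproblem is itself solved only approximately by truncated (Steihaug-Toint) conjugate gradients. All names (grad_i, hess_i, sample_size, and so on) are hypothetical.

```python
import numpy as np

def subsampled_tr_step(grad_i, hess_i, x, n, sample_size, radius, cg_iters=20):
    """One inexact Newton-type trust-region step (illustrative sketch only).

    grad_i(x, i) / hess_i(x, i): gradient / Hessian of the i-th sample loss.
    Both the gradient and the Hessian are approximated on a random subsample,
    and the trust-region subproblem is solved inexactly by truncated CG.
    """
    idx = np.random.choice(n, size=sample_size, replace=False)
    g = np.mean([grad_i(x, i) for i in idx], axis=0)   # inexact gradient
    H = np.mean([hess_i(x, i) for i in idx], axis=0)   # inexact Hessian
    if np.linalg.norm(g) < 1e-16:
        return np.zeros_like(x)

    # Steihaug-Toint truncated CG: approximately minimize the quadratic model
    #   m(s) = g^T s + 0.5 s^T H s   subject to ||s|| <= radius.
    s = np.zeros_like(x)
    r, d = g.copy(), -g
    for _ in range(cg_iters):
        Hd = H @ d
        dHd = d @ Hd
        if dHd <= 0:                              # negative curvature: stop at the boundary
            return _to_boundary(s, d, radius)
        alpha = (r @ r) / dHd
        if np.linalg.norm(s + alpha * d) >= radius:
            return _to_boundary(s, d, radius)
        s = s + alpha * d
        r_new = r + alpha * Hd                    # CG residual H s + g
        if np.linalg.norm(r_new) <= 1e-8 * np.linalg.norm(g):
            return s
        d = -r_new + (r_new @ r_new) / (r @ r) * d
        r = r_new
    return s

def _to_boundary(s, d, radius):
    # Move from s along d until hitting the trust-region boundary ||s + tau*d|| = radius.
    a, b, c = d @ d, 2 * (s @ d), s @ s - radius**2
    tau = (-b + np.sqrt(b * b - 4 * a * c)) / (2 * a)
    return s + tau * d
```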

Similar Articles

Inexact Newton Dogleg Methods

The dogleg method is a classical trust-region technique for globalizing Newton’s method. While it is widely used in optimization, including large-scale optimization via truncated Newton approaches, its implementation in general inexact Newton methods for systems of nonlinear equations can be problematic. In this paper, we first outline a very general dogleg method suitable for the general inexac...
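
For context, the sketch below shows the classical dogleg step the abstract refers to, assuming a positive-definite model Hessian B; an inexact Newton variant would replace the exact linear solve with an iterative solver stopped early. The names are illustrative and not taken from the cited paper.

```python
import numpy as np

def dogleg_step(g, B, radius):
    """Classical dogleg step for the model m(p) = g^T p + 0.5 p^T B p
    subject to ||p|| <= radius, with B positive definite (illustrative sketch)."""
    p_newton = np.linalg.solve(B, -g)        # full (quasi-)Newton step
    if np.linalg.norm(p_newton) <= radius:
        return p_newton                      # Newton point already inside the region

    p_cauchy = -(g @ g) / (g @ B @ g) * g    # unconstrained steepest-descent minimizer
    if np.linalg.norm(p_cauchy) >= radius:
        return radius * p_cauchy / np.linalg.norm(p_cauchy)

    # Follow the dogleg path from the Cauchy point toward the Newton point
    # until it crosses the trust-region boundary.
    d = p_newton - p_cauchy
    a, b, c = d @ d, 2 * (p_cauchy @ d), p_cauchy @ p_cauchy - radius**2
    tau = (-b + np.sqrt(b * b - 4 * a * c)) / (2 * a)
    return p_cauchy + tau * d
```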

Quasi-Newton Methods for Nonconvex Constrained Multiobjective Optimization

Here, a quasi-Newton algorithm for constrained multiobjective optimization is proposed. Under suitable assumptions, global convergence of the algorithm is established.

Globally Convergent Inexact Newton Methods

Inexact Newton methods for finding a zero of F : R^n → R^n are variations of Newton's method in which each step only approximately satisfies the linear Newton equation but still reduces the norm of the local linear model of F. Here, inexact Newton methods are formulated that incorporate features designed to improve convergence from arbitrary starting points. For each method, a basic global convergence ...
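
A minimal sketch of the inexact Newton condition described above (assumed details, not the specific globalized methods of the paper): each step s need only satisfy the residual bound ||F(x) + F'(x) s|| <= eta ||F(x)||, and a simple backtracking safeguard keeps the norm of F decreasing from an arbitrary starting point.

```python
import numpy as np
from scipy.sparse.linalg import gmres

def inexact_newton(F, J, x, eta=0.1, tol=1e-8, max_iter=50):
    """Inexact Newton iteration for F(x) = 0 with a norm-reduction backtracking
    safeguard (illustrative sketch, not a specific published algorithm)."""
    for _ in range(max_iter):
        Fx = F(x)
        if np.linalg.norm(Fx) <= tol:
            break
        # Approximate Newton step: GMRES stopped once the relative residual of
        # J(x) s = -F(x) falls below the forcing term eta.
        s, _ = gmres(J(x), -Fx, rtol=eta)   # keyword is `tol` in older SciPy releases
        # Backtracking: shrink the step until ||F|| actually decreases.
        t = 1.0
        while np.linalg.norm(F(x + t * s)) > (1 - 1e-4 * t) * np.linalg.norm(Fx) and t > 1e-10:
            t *= 0.5
        x = x + t * s
    return x
```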

Convergence analysis of inexact proximal Newton-type methods

We study inexact proximal Newton-type methods to solve convex optimization problems in composite form: minimize_{x ∈ R^n} f(x) := g(x) + h(x), where g is convex and continuously differentiable and h : R^n → R is a convex but not necessarily differentiable function whose proximal mapping can be evaluated efficiently. Proximal Newton-type methods require the solution of subproblems to obtain the search ...
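
A minimal sketch of one (possibly inexact) proximal Newton-type step under stated assumptions: here h is taken to be lam * ||x||_1, so its proximal mapping is soft-thresholding, and the quadratic-plus-h subproblem is solved only approximately by a fixed number of proximal-gradient iterations. Function names are hypothetical.

```python
import numpy as np

def soft_threshold(z, t):
    """Proximal mapping of t * ||.||_1 (soft-thresholding)."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def prox_newton_step(grad_g, hess_g, x, lam, inner_iters=25):
    """One inexact proximal Newton step for minimizing g(x) + lam * ||x||_1
    (illustrative sketch). The subproblem
        minimize_d  grad^T d + 0.5 d^T H d + lam * ||x + d||_1
    is solved approximately by proximal-gradient iterations."""
    g, H = grad_g(x), hess_g(x)
    L = np.linalg.norm(H, 2)           # step size 1/L from the model's curvature
    z = x.copy()                       # subproblem iterate in the original variable
    for _ in range(inner_iters):
        model_grad = g + H @ (z - x)   # gradient of the quadratic model at z
        z = soft_threshold(z - model_grad / L, lam / L)
    return z - x                       # search direction; a line search would follow
```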

An inexact Newton method for nonconvex equality constrained optimization

We present a matrix-free line search algorithm for large-scale equality constrained optimization that allows for inexact step computations. For strictly convex problems, the method reduces to the inexact sequential quadratic programming approach proposed by Byrd et al. [SIAM J. Optim. 19(1) 351–369, 2008]. For nonconvex problems, the methodology developed in this paper allows for the presence o...

Journal

Journal title: INFORMS Journal on Optimization

Year: 2021

ISSN: 2575-1484, 2575-1492

DOI: https://doi.org/10.1287/ijoo.2019.0043